%nsf[w88,jmc] 1988 NSF Basic Research in AI Proposal
topics:
Second order logic and the model theory of first order logic.
contexts
nonmonotonic logic
haven't succeeded yet
more reification
\section{Introduction}
This is a proposal for renewed support of basic research in
artificial intelligence by John McCarthy and students with the addition
of work by Vladimir Lifschitz, a Senior Research Associate, starting
in January 1990. Most of the work is in the epistemological part of
AI, concentrating on developing formal methods of non-monotonic reasoning
and using them to express common sense knowledge and reasoning. McCarthy
has also recently returned to the subject of heuristics and their
formal expression, and work will continue in this direction, depending
on how successful it proves.
NEEDS A SUMMARY OF WHY NON-MONOTONIC REASONING IS IMPORTANT. IT CAN
BE TAKEN FROM APPLICATIONS OF circumscription.
\section{Previous Work and its Present Status}
(McCarthy 1977) contains the first published work on formalized
non-monotonic reasoning, introducing a version of circumscription that
is now mostly obsolete. A conference on non-monotonic reasoning was
held at Stanford in 1978, and {\it Artificial Intelligence} devoted
a special issue to the subject in 1980, including (McCarthy 1980),
(McDermott and Doyle 1980) and (Reiter 1980). The first paper introduced
predicate circumscription, the second introduced a now obsolete
``Non-monotonic Logic'' and the third introduced a ``Logic of Defaults''.
These papers started an active branch of research that has resulted
already in several hundred papers. The 1984 AAAI-sponsored conference
on non-monotonic reasoning was another landmark in the field. (McCarthy
1986), which further developed circumscription, was based on a paper presented
at that conference. Robert Moore's (198x) autoepistemic logic is
a more recent significant non-monotonic formalism.
The basic methods of non-monotonic reasoning have been elaborated and
varied by many people including
Vladimir Lifschitz,
John McCarthy,
Michael Gelfond,
Donald Perlis,
Jack Minker,
Kurt Konolige,
Robert Moore,
and Raymond Reiter.
Many people have also worked on computational aspects of non-monotonic
reasoning including
All the formalized systems of non-monotonic reasoning
start from a collection $A$ of sentences and infer further sentences.
However, the inferred sentences aren't always true in all models
of $A$. There are three basic ways of determining what sentences
are inferable.
1. Some methods accept certain sentences because their
negations are unprovable. In Prolog, negation as failure
accepts a sentence $¬p$ when a certain standard effort to
prove $p$ has failed.
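As a small illustration (ours, not taken from any of the papers cited
in this proposal), suppose $A$ consists of the single clause
$$q ← ¬p,$$
understood as a Prolog rule in which $¬p$ is negation as failure. The
standard effort to prove $p$ fails, so $¬p$ is accepted and $q$ is
inferred. Yet $q$ is not true in all models of $A$, since there are
models in which $p$ holds; indeed adding $p$ to $A$ causes $q$ to be
withdrawn, which is what makes the inference non-monotonic.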
The present proposal is for continuing a line of research in
artificial intelligence based on the use of mathematical logic. It
started in the late 1950s. We will first explain the general approach,
not assuming that the reader is familiar with previous papers, then describe
the results of the last few years, the current scientific situation in
AI, and finally the problems we propose to study in the next three years.
\section{Mathematical Logic in AI}
The mathematical logic approach to AI involves the following.
1. The use of mathematical logical languages for expressing facts
about the world in the memory of a computer. The most important kinds of
facts are those describing the current situation, the goals to be achieved
and the consequences of actions (an example of such a formula is given after this list).
2. The use of logical inference to reach conclusions, especially
conclusions about what the system should do to achieve its goals. In early
years logical deduction was the only kind of logical inference that
had been formalized, but since the late 1970s
non-monotonic reasoning has been formalized in various ways and can
also be used. The system tries to find an action strategy that it
can infer is appropriate to achieve its goals.
3. The actions that can be performed include mental actions
such as inference and observation. The logic approach will be
complete only when these actions and their effects are also described
by logical formulas and logical inference is also used to decide
what mental actions to perform. Although the need for this was
discussed in (McCarthy 1960), the right formalism still hasn't been
found.
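As an example of the kind of formula referred to in point 1, here is a
typical situation calculus axiom for a blocks-world action, stated in one
common notation; the particular formula is ours and is given only for
illustration:
$$∀x\,y\,s.\;clear(x,s) ∧ clear(y,s) ⊃ on(x,y,result(move(x,y),s)).$$
Here $result(move(x,y),s)$ denotes the situation that results from moving
block $x$ onto block $y$ in situation $s$, so the axiom describes one
consequence of that action.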
A non-mathematical discussion (the restriction was imposed by the editor)
appears in (McCarthy 1988a), included with this proposal as Appendix A.
\section{Recent results of the logic approach}
The logic approach has become more popular since the late 1970s
and many problems have been explored. The biggest area of activity is
non-monotonic reasoning. Besides circumscription, described in (McCarthy
1980, 1986), there are Reiter's logic of defaults, McDermott's
non-monotonic logic and Moore's autoepistemic logic. Many authors, e.g.
Lifschitz, Gelfond, Clark, Doyle and de Kleer, have treated methods of
carrying out non-monotonic reasoning in the computer.
Two important areas of application of logical formalisms to
expressing common sense knowledge are expressing the facts about
the consequences of physical actions and expressing the facts about
people's knowledge, especially what can be inferred about what they
know or don't know on the basis of assumptions about their base
knowledge and what kinds of reasoning they can do. The former
is treated, for example, in (McCarthy 1970 and 1986) and (Kowalski 1985).
The latter is treated in (McCarthy 1979) and (Halpern 19xx, etc.).
Non-monotonic reasoning has permitted inferring the effects
of actions from much weaker premises than are needed when deduction
is the only tool available. This is discussed in (McCarthy 1986),
where a formalism using {\it abnormality predicates} is introduced
in combination with circumscription. Other authors have made considerable
use of abnormality predicates. A difficulty with this style of non-monotonic
reasoning was independently discovered by Lifschitz and by Hanks and
McDermott. This difficulty gives rise to unintended minimal models
of abnormality in the circumscription formalism and also to unintended
models when the logic of defaults is used.
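For concreteness, a much simplified example of the abnormality device,
adapted from the flying-birds example commonly used to explain (McCarthy
1986), is
$$∀x.\;bird(x) ∧ ¬ab(x) ⊃ flies(x),\qquad ∀x.\;penguin(x) ⊃ ab(x).$$
Circumscribing $ab$, i.e. preferring models in which its extension is
minimal while $flies$ is allowed to vary, captures the default that birds
fly: roughly, a bird is concluded to fly unless the axioms force it to be
abnormal, as they do for penguins.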
Several formalisms that get around the difficulties have
been discovered by Lifschitz (198x), Shoham (198x), Gelfond (198x), etc.
However, none of these solutions seems entirely satisfactory,
because they lack a sufficient degree of {\it elaboration tolerance}.
During the next three years we propose to explore the following
ideas.
1. Elaboration tolerance.
2. Formalization of context.
The logical languages used in studying mathematics and
used so far in studying AI share a common limitation. The designer
of the language has a limited domain in mind, e.g. the natural
numbers or the blocks on a table and the actions that can be
carried out with the blocks.
3. Mental situations and hill climbing in mental situation space.
4. Non-monotonic reasoning using increased reification of
mental entities.
Consider the Yale shooting problem. From the human point of
view, it is clear which model of the axioms should be preferred. A reason
why Fred should die is stated: he is shot. No reason why the gun
should have become unloaded is given, although it might have.
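A schematic version of the axioms, in our paraphrase of the
Hanks-McDermott example using the abnormality style of (McCarthy 1986),
reifies fluents so that the frame axiom can quantify over them (free
variables are universally quantified):
$$holds(alive,S0),\qquad holds(loaded,result(load,s)),$$
$$holds(loaded,s) ⊃ ¬holds(alive,result(shoot,s)),$$
$$holds(f,s) ∧ ¬ab(f,e,s) ⊃ holds(f,result(e,s)).$$
In the intended model $loaded$ persists through the $wait$ event, so that
after $load$, $wait$ and $shoot$ the fluent $alive$ is abnormal with
respect to $shoot$ and Fred dies. In the unintended model $loaded$ is
instead abnormal with respect to $wait$, the gun becomes mysteriously
unloaded, and Fred survives. The two sets of abnormalities are
incomparable, so both models minimize $ab$, and circumscription by itself
does not select the intended one.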